Spatio-temporal data contain rich information and have been widely studied in recent years owing to the rapid development of related applications in many fields. For example, medical institutions often use electrodes attached to different parts of a patient's body to analyse electrical brain data that are rich in spatial and temporal features for health assessment and disease diagnosis. Most existing research applies deep learning techniques such as convolutional neural networks (CNNs) or recurrent neural networks (RNNs) to extract the hidden spatio-temporal features. However, it remains challenging to incorporate the inter-dependent spatial information and the dynamic temporal changes simultaneously. In practice, a model that leverages these spatio-temporal features for complex prediction tasks usually requires a large amount of training data to reach satisfactory performance. Considering the above challenges, in this paper we propose an adaptive federated relevance framework, namely FedRel, for spatio-temporal graph learning. After transforming the raw spatio-temporal data into high-quality features, the core Dynamic Inter-Intra Graph (DIIG) module of the framework uses these features to generate spatio-temporal graphs that capture the hidden topological and long-term temporal correlation information. To improve the model's generalization ability and performance while preserving local data privacy, we further design a relevance-driven federated learning module that leverages the diverse data distributions of different participants through attentive aggregation of their models.
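To make the aggregation idea concrete, below is a minimal sketch of relevance-driven attentive aggregation, assuming each participant's model is a flat parameter vector; the function name, the cosine-similarity relevance score, and the temperature are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def attentive_aggregate(global_params, client_params, temperature=1.0):
    """Weight each client's model by its cosine similarity to the current
    global model, then take the softmax-weighted average."""
    sims = np.array([
        np.dot(global_params, c)
        / (np.linalg.norm(global_params) * np.linalg.norm(c) + 1e-12)
        for c in client_params
    ])
    weights = np.exp(sims / temperature)
    weights /= weights.sum()                     # attention weights over clients
    return np.sum([w * c for w, c in zip(weights, client_params)], axis=0)

# Toy usage: three clients, five parameters each.
rng = np.random.default_rng(0)
global_params = rng.normal(size=5)
clients = [global_params + 0.1 * rng.normal(size=5) for _ in range(3)]
print(attentive_aggregate(global_params, clients))
```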
Graph representation learning has attracted increasing attention in recent years, particularly for learning low-dimensional embeddings at the node and graph levels for classification and recommendation tasks. To enable representation learning on large-scale real-world graph data, many studies have focused on developing different sampling strategies to facilitate the training process. Here, we propose an adaptive Graph Policy-driven Sampling model (GPS), in which the influence of each node within its local neighbourhood is captured through adaptive correlation computation. Specifically, the selection of neighbours is guided by an adaptive policy algorithm and contributes directly to the message aggregation, node embedding update, and graph-level readout steps. We then conduct comprehensive experiments on graph classification tasks from various perspectives. The proposed model outperforms existing approaches by 3%-8% on several important benchmarks and achieves state-of-the-art performance on real-world datasets.
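A hedged sketch of the policy-driven sampling idea: each neighbour is scored by an adaptive relevance measure (a simple dot product stands in for the learned policy here) and only the top-k contribute to message aggregation. All names below are illustrative, not from the paper.

```python
import numpy as np

def policy_sample_aggregate(h, adj_list, node, k=2):
    """Select the k most relevant neighbours of `node` and mean-aggregate them."""
    neighbors = adj_list[node]
    scores = np.array([h[node] @ h[n] for n in neighbors])  # relevance policy
    top = [neighbors[i] for i in np.argsort(-scores)[:k]]   # top-k selection
    return h[top].mean(axis=0)                              # message aggregation

h = np.random.default_rng(1).normal(size=(5, 4))  # node embeddings
adj_list = {0: [1, 2, 3, 4]}
print(policy_sample_aggregate(h, adj_list, 0, k=2))
```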
We present a spatio-temporal federated learning framework for graph neural networks, namely STFL. The framework explores the underlying correlations in the input spatio-temporal data and transforms them into node features and an adjacency matrix. The federated learning setting of the framework ensures data privacy while achieving good model generalization. Experimental results on the sleep-stage dataset ISRUC_S3 illustrate the effectiveness of STFL on graph prediction tasks.
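The kind of transformation STFL describes can be sketched as follows: raw multi-channel time series become node features, and pairwise channel correlation yields the adjacency matrix. The correlation threshold and feature choice below are assumptions, not the paper's exact recipe.

```python
import numpy as np

def series_to_graph(signals, threshold=0.5):
    """signals: (num_channels, num_timesteps) array, one channel per node."""
    node_features = signals                      # each row is a node feature vector
    corr = np.corrcoef(signals)                  # channel-by-channel correlation
    adjacency = (np.abs(corr) > threshold).astype(float)
    np.fill_diagonal(adjacency, 0.0)             # drop self-loops
    return node_features, adjacency

signals = np.random.default_rng(2).normal(size=(6, 100))  # e.g. 6 EEG channels
x, a = series_to_graph(signals)
print(a.shape)  # (6, 6)
```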
Land-air bimodal vehicles are flourishing in both academia and industry because they combine the high mobility of aerial vehicles with the long endurance of ground vehicles. In this work, we propose an autonomous and adaptive navigation framework that brings full autonomy to this class of vehicles. The framework mainly consists of 1) a hierarchical motion planner that generates safe and low-power land-air trajectories in unknown environments, and 2) a unified motion controller that dynamically adjusts energy consumption during terrestrial locomotion. Extensive real-world experiments and benchmark comparisons are conducted on a customized robotic platform to validate the robustness and performance of the proposed framework. During the tests, the robot safely traversed complex environments with integrated land-air mobility and achieved roughly 7x energy savings in terrestrial locomotion. Finally, we will release our code and hardware configuration for the community's reference.
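Purely illustrative: one way a hierarchical planner might trade off aerial and terrestrial locomotion is by comparing per-segment energy costs and flying only where ground travel is infeasible. The cost model below is an assumption for intuition, not the paper's planner.

```python
def choose_mode(ground_traversable, ground_cost, air_cost):
    """Pick the locomotion mode for one trajectory segment."""
    if ground_traversable and ground_cost <= air_cost:
        return "ground"  # ground driving is far cheaper per metre
    return "air"         # fly over obstacles or rough terrain

# Toy usage: aerial flight assumed ~7x more costly per metre.
print(choose_mode(True, ground_cost=1.0, air_cost=7.0))   # -> "ground"
print(choose_mode(False, ground_cost=1.0, air_cost=7.0))  # -> "air"
```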
Malicious applications of deepfakes (i.e., technologies that generate target facial attributes or entire faces from facial images) pose a great threat to individuals' reputation and security. To mitigate these threats, recent studies have proposed adversarial watermarks that counter deepfake models, causing them to produce distorted outputs. Despite the impressive results, these adversarial watermarks have low image-level and model-level transferability, meaning that each can protect only one facial image against one specific deepfake model. To address these issues, we propose a novel solution that generates a Cross-Model Universal Adversarial Watermark (CMUA-Watermark), protecting a large number of facial images from multiple deepfake models. Specifically, we first propose a cross-model universal attack pipeline that iteratively attacks multiple deepfake models. Then we design a two-level perturbation fusion strategy to alleviate the conflicts between the adversarial watermarks generated for different facial images and models. Moreover, we address the key problem of cross-model optimization with a heuristic approach that automatically finds suitable attack step sizes for different models, further weakening the model-level conflicts. Finally, we introduce a more reasonable and comprehensive evaluation method to fully test the proposed approach and compare it with existing ones. Extensive experimental results demonstrate that the proposed CMUA-Watermark effectively distorts the fake facial images generated by multiple deepfake models while achieving better performance than existing methods.
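A hedged sketch of an iterative cross-model universal attack loop: one shared perturbation is updated against each deepfake model in turn with a per-model step size, then clipped to an L-infinity ball. The `grad_fns` are placeholders for the gradient of each model's distortion loss; this is not the paper's exact pipeline (which additionally fuses perturbations at two levels).

```python
import numpy as np

def cross_model_attack(images, grad_fns, step_sizes, epsilon=0.05, rounds=10):
    delta = np.zeros_like(images[0])             # universal watermark
    for _ in range(rounds):
        for grad_fn, alpha in zip(grad_fns, step_sizes):
            for img in images:
                g = grad_fn(img + delta)         # gradient w.r.t. the input
                delta += alpha * np.sign(g)      # FGSM-style ascent step
                delta = np.clip(delta, -epsilon, epsilon)
    return delta

# Toy usage with dummy "models" whose loss gradients are constant.
images = [np.zeros((4, 4)) for _ in range(3)]
grad_fns = [lambda x: np.ones_like(x), lambda x: -np.ones_like(x)]
delta = cross_model_attack(images, grad_fns, step_sizes=[0.01, 0.02])
print(delta.min(), delta.max())
```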
Relative radiometric normalization (RRN) of different satellite images of the same terrain is necessary for change detection, object classification/segmentation, and mapping tasks. However, traditional RRN models are not robust, as they are disturbed by object changes, and RRN models that explicitly account for object changes cannot robustly obtain the no-change set. This paper proposes an automatic, robust relative radiometric normalization method based on latent change-noise modelling. It exploits the prior knowledge that no-change points carry small-scale noise and change points carry large-scale radiometric noise after radiometric normalization, and combines this with a stochastic expectation-maximization method to quickly and robustly extract the no-change set used to learn the relative radiometric normalization mapping functions. This grounds our model theoretically in probability theory and mathematical deduction. Specifically, when histogram matching is chosen as the relative radiometric normalization learning scheme combined with mixture-of-Gaussian noise modelling (HM-RRN-MoG), the HM-RRN-MoG model achieves the best performance. Our model is robust against clouds, fog, and changes. The method also naturally yields a robust evaluation indicator for RRN, namely the mean square error on the no-change set. We apply the HM-RRN-MoG model to downstream vegetation/water change detection tasks, where it reduces the radiometric contrast and the NDVI/NDWI differences on the no-change set and produces consistent and comparable results. We further utilize the no-change set in a building change detection task, effectively reducing pseudo-changes and improving precision.
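A minimal sketch of the noise-modelling idea: fit a two-component Gaussian mixture to per-pixel residuals after histogram matching, then keep the pixels assigned to the small-variance ("no-change") component. A hand-rolled EM is used here for self-containment; the paper's exact formulation may differ.

```python
import numpy as np

def nochange_mask(residuals, iters=50):
    r = residuals.ravel()
    mu = np.array([0.0, 0.0])
    var = np.array([r.var() * 0.1, r.var() * 10.0])  # small- vs large-noise component
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibility of each component for each residual.
        pdf = pi * np.exp(-0.5 * (r[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        resp = pdf / (pdf.sum(axis=1, keepdims=True) + 1e-12)
        # M-step: update mixture weights, means, and variances.
        nk = resp.sum(axis=0) + 1e-12
        pi, mu = nk / len(r), (resp * r[:, None]).sum(axis=0) / nk
        var = (resp * (r[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-12
    small = int(np.argmin(var))                      # no-change = low-variance component
    return resp[:, small] > 0.5

residuals = np.concatenate([np.random.default_rng(3).normal(0, 0.1, 900),
                            np.random.default_rng(4).normal(0, 2.0, 100)])
print(nochange_mask(residuals).mean())  # fraction of pixels kept as no-change
```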
While there has been substantial research on hardware acceleration of deep learning for images, considerably less attention has been paid to accelerating deep learning applications involving graphs. The unique characteristics of graphs, such as irregular memory access and dynamic parallelism, impose several challenges when the algorithms are mapped to CPUs or GPUs. To address these challenges while exploiting all available sparsity, we propose a flexible architecture called SPA-GCN for accelerating graph convolutional networks (GCNs), the core computation unit in deep learning algorithms on graphs. The architecture is specialized for processing many small graphs, since graph size has a significant impact on design considerations. In this context, we use SimGNN, a neural-network-based graph matching algorithm, as a case study to demonstrate the effectiveness of our architecture. Experimental results show that SPA-GCN delivers a high speedup compared with multi-core CPU and GPU implementations, demonstrating the efficiency of the design.
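For reference, the core computation such a GCN accelerator must map is, per layer, a sparse product H' = ReLU(A_hat H W), where A_hat is the normalized adjacency. Below is a dense NumPy rendering of one layer purely for illustration; the accelerator's point is to exploit the sparsity this dense form ignores.

```python
import numpy as np

def gcn_layer(a_hat, h, w):
    """One graph-convolution layer: neighbourhood aggregation, then projection."""
    return np.maximum(a_hat @ h @ w, 0.0)        # ReLU(A_hat H W)

n, f_in, f_out = 5, 8, 4
a = (np.random.default_rng(5).random((n, n)) > 0.6).astype(float)
a_hat = a + np.eye(n)                            # add self-loops
d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
a_hat = d_inv_sqrt @ a_hat @ d_inv_sqrt          # symmetric normalization
h = np.random.default_rng(6).normal(size=(n, f_in))
w = np.random.default_rng(7).normal(size=(f_in, f_out))
print(gcn_layer(a_hat, h, w).shape)              # (5, 4)
```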
Masked image modeling (MIM) performs strongly in pre-training large vision Transformers (ViTs). However, small models that are critical for real-world applications cannot or only marginally benefit from this pre-training approach. In this paper, we explore distillation techniques to transfer the success of large MIM-based pre-trained models to smaller ones. We systematically study different options in the distillation framework, including distilling targets, losses, input, network regularization, sequential distillation, etc., revealing that: 1) Distilling token relations is more effective than CLS token- and feature-based distillation; 2) An intermediate layer of the teacher network as the target performs better than the last layer when the depth of the student mismatches that of the teacher; 3) Weak regularization is preferred; etc. With these findings, we achieve significant fine-tuning accuracy improvements over the scratch MIM pre-training on ImageNet-1K classification, using all the ViT-Tiny, ViT-Small, and ViT-Base models, with +4.2%/+2.4%/+1.4% gains, respectively. Our TinyMIM model of base size achieves 52.2 mIoU in ADE20K semantic segmentation, which is +4.1 higher than the MAE baseline. Our TinyMIM model of tiny size achieves 79.6% top-1 accuracy on ImageNet-1K image classification, which sets a new record for small vision models of the same size and computation budget. This strong performance suggests an alternative way of developing small vision Transformer models, that is, by exploring better training methods rather than introducing inductive biases into architectures as in most previous works. Code is available at https://github.com/OliverRensu/TinyMIM.
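A hedged sketch of relation distillation, the option finding 1) favours: instead of matching features directly, the student matches the teacher's token-to-token affinity map, so their embedding dimensions need not agree. Affinities here are softmax-normalized scaled dot products; TinyMIM's exact relation definitions may differ in detail.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def relation_distill_loss(student_tokens, teacher_tokens):
    """Mean squared error between the two token-affinity maps."""
    rs = softmax(student_tokens @ student_tokens.T / np.sqrt(student_tokens.shape[1]))
    rt = softmax(teacher_tokens @ teacher_tokens.T / np.sqrt(teacher_tokens.shape[1]))
    return ((rs - rt) ** 2).mean()

s = np.random.default_rng(8).normal(size=(16, 32))  # 16 tokens, student dim 32
t = np.random.default_rng(9).normal(size=(16, 64))  # teacher dim 64 still works
print(relation_distill_loss(s, t))
```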
In this paper, we propose a robust 3D detector, named Cross Modal Transformer (CMT), for end-to-end 3D multi-modal detection. Without explicit view transformation, CMT takes the image and point cloud tokens as inputs and directly outputs accurate 3D bounding boxes. The spatial alignment of multi-modal tokens is performed implicitly, by encoding the 3D points into multi-modal features. The core design of CMT is quite simple while its performance is impressive: CMT obtains 73.0% NDS on the nuScenes benchmark. Moreover, CMT exhibits strong robustness even if the LiDAR is missing. Code will be released at https://github.com/junjie18/CMT.
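A minimal sketch of the implicit-alignment idea: tokens from either modality receive an embedding of associated 3D coordinates, so the transformer can align modalities without an explicit view transform. The sinusoidal encoding below is an assumption for brevity; CMT's actual position encodings differ in detail.

```python
import numpy as np

def coord_encoding(xyz, dim=32):
    """Sinusoidal encoding of 3D points -> (N, 3 * dim) features."""
    freqs = 2.0 ** np.arange(dim // 2)           # geometric frequency ladder
    angles = xyz[:, :, None] * freqs             # (N, 3, dim/2)
    enc = np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)
    return enc.reshape(xyz.shape[0], -1)         # flatten per point

points = np.random.default_rng(10).normal(size=(100, 3))  # LiDAR points
tokens = np.zeros((100, 3 * 32))                          # placeholder point tokens
tokens = tokens + coord_encoding(points)                  # position-aware tokens
print(tokens.shape)  # (100, 96)
```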
Dataset distillation has emerged as a prominent technique to improve data efficiency when training machine learning models. It encapsulates the knowledge from a large dataset into a smaller synthetic dataset. A model trained on this smaller distilled dataset can attain comparable performance to a model trained on the original training dataset. However, the existing dataset distillation techniques mainly aim at achieving the best trade-off between resource usage efficiency and model utility. The security risks stemming from them have not been explored. This study performs the first backdoor attack against the models trained on the data distilled by dataset distillation models in the image domain. Concretely, we inject triggers into the synthetic data during the distillation procedure rather than during the model training stage, where all previous attacks are performed. We propose two types of backdoor attacks, namely NAIVEATTACK and DOORPING. NAIVEATTACK simply adds triggers to the raw data at the initial distillation phase, while DOORPING iteratively updates the triggers during the entire distillation procedure. We conduct extensive evaluations on multiple datasets, architectures, and dataset distillation techniques. Empirical evaluation shows that NAIVEATTACK achieves decent attack success rate (ASR) scores in some cases, while DOORPING reaches higher ASR scores (close to 1.0) in all cases. Furthermore, we conduct a comprehensive ablation study to analyze the factors that may affect the attack performance. Finally, we evaluate multiple defense mechanisms against our backdoor attacks and show that our attacks can practically circumvent these defense mechanisms.
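A hedged sketch of the NAIVEATTACK idea: stamp a fixed trigger patch onto a fraction of the raw images (and relabel them) before distillation runs, so the backdoor is baked into the synthetic set. The patch size, location, and poison rate below are illustrative, not the paper's settings; DOORPING would additionally update the trigger throughout distillation.

```python
import numpy as np

def poison_before_distillation(images, labels, target_label, rate=0.1, patch=3):
    rng = np.random.default_rng(0)
    idx = rng.choice(len(images), int(rate * len(images)), replace=False)
    poisoned, new_labels = images.copy(), labels.copy()
    poisoned[idx, -patch:, -patch:] = 1.0        # white square in the corner
    new_labels[idx] = target_label               # flip to the attacker's class
    return poisoned, new_labels                  # then feed into distillation

images = np.random.default_rng(11).random((100, 28, 28))
labels = np.random.default_rng(12).integers(0, 10, 100)
pimg, plab = poison_before_distillation(images, labels, target_label=0)
print((plab == 0).sum())
```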